  • Incremental Learning in Diagonal Linear Networks

    Updated: 2023-09-30 20:28:19
    Diagonal linear networks (DLNs) are a toy simplification of artificial neural networks; they consist of a quadratic reparametrization of linear regression that induces a sparse implicit regularization. In this paper, we describe the trajectory of the gradient flow of DLNs in the limit of small initialization. We show that incremental learning is effectively performed in the limit: coordinates are successively activated, while the iterate is the minimizer of the loss constrained to have support on the active coordinates only. This shows that the sparse implicit regularization of DLNs decreases with time. For technical reasons, this work is restricted to the underparametrized regime with anti-correlated features.
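
    The reparametrization can be made concrete with a short experiment. The sketch below assumes the common parametrization beta = u*u - v*v and a small initialization scale alpha (illustrative values, not necessarily the paper's exact setup); as it runs, the coordinates of beta switch on one after another.

      # Gradient descent on the quadratically reparametrized least-squares loss.
      import numpy as np

      rng = np.random.default_rng(0)
      n, d, alpha, lr = 50, 10, 1e-8, 1e-2          # illustrative values
      X = rng.standard_normal((n, d))
      beta_star = np.zeros(d); beta_star[:3] = [3.0, -2.0, 1.0]   # sparse ground truth
      y = X @ beta_star

      u = alpha * np.ones(d)
      v = alpha * np.ones(d)
      for t in range(2_001):
          g = X.T @ (X @ (u * u - v * v) - y) / n   # gradient of the loss w.r.t. beta
          u -= lr * 2 * u * g                        # chain rule through u*u
          v += lr * 2 * v * g                        # chain rule through -v*v
          if t % 200 == 0:
              print(t, np.round(u * u - v * v, 2))   # support grows coordinate by coordinate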

  • DART: Distance Assisted Recursive Testing

    Updated: 2023-09-30 20:28:19
    DART: Distance Assisted Recursive Testing. Xuechan Li, Anthony D. Sung, Jichun Xie; 24(169):1-41, 2023. Abstract: Multiple testing is a commonly used tool in modern data science. Sometimes, the hypotheses are embedded in a space where the distances between the hypotheses reflect their co-null and co-alternative patterns. Properly incorporating the distance information in testing will boost testing power. Hence, we developed a new multiple testing framework named Distance Assisted Recursive Testing (DART). DART features joint artificial intelligence (AI) and statistics modeling. It has two stages.

  • Robust Methods for High-Dimensional Linear Learning

    Updated: 2023-09-30 20:28:19
    Robust Methods for High-Dimensional Linear Learning. Ibrahim Merad, Stéphane Gaïffas; 24(165):1-44, 2023. Abstract: We propose statistically robust and computationally efficient linear learning methods in the high-dimensional batch setting, where the number of features $d$ may exceed the sample size $n$. We employ, in a generic learning setting, two algorithms depending on whether the considered loss function is gradient-Lipschitz or not. Then, we instantiate our framework on several applications including vanilla sparse, group-sparse and low-rank matrix recovery. This leads, for each application

  • A Framework and Benchmark for Deep Batch Active Learning for Regression

    Updated: 2023-09-30 20:28:19
    A Framework and Benchmark for Deep Batch Active Learning for Regression. David Holzmüller, Viktor Zaverkin, Johannes Kästner, Ingo Steinwart; 24(164):1-81, 2023. Abstract: The acquisition of labels for supervised learning can be expensive. To improve the sample efficiency of neural network regression, we study active learning methods that adaptively select batches of unlabeled data for labeling. We present a framework for constructing such methods out of network-dependent base kernels, kernel transformations, and selection methods. Our framework encompasses many existing Bayesian methods based

  • Flexible Model Aggregation for Quantile Regression

    Updated: 2023-09-30 20:28:19
    Flexible Model Aggregation for Quantile Regression. Rasool Fakoor, Taesup Kim, Jonas Mueller, Alexander J. Smola, Ryan J. Tibshirani; 24(162):1-45, 2023. Abstract: Quantile regression is a fundamental problem in statistical learning, motivated by a need to quantify uncertainty in predictions or to model a diverse population without being overly reductive. For instance, epidemiological forecasts, cost estimates, and revenue predictions all benefit from being able to quantify the range of possible values accurately. As such, many models have been developed for this problem over many years of

  • Multi-source Learning via Completion of Block-wise Overlapping Noisy Matrices

    Updated: 2023-09-30 20:28:19
    Multi-source Learning via Completion of Block-wise Overlapping Noisy Matrices. Doudou Zhou, Tianxi Cai, Junwei Lu; 24(221):1-43, 2023. Abstract: Electronic healthcare records (EHR) provide a rich resource for healthcare research. An important problem for the efficient utilization of EHR data is the representation of the EHR features, which include the unstructured clinical narratives and the structured codified data. Matrix factorization-based embeddings trained using the summary-level co-occurrence statistics of EHR data have provided a promising solution for feature representation while

  • A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning

    Updated: 2023-09-30 20:28:19
    A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning. Wei-Fang Sun, Cheng-Kuang Lee, Simon See, Chun-Yi Lee; 24(220):1-32, 2023. Abstract: In fully cooperative multi-agent reinforcement learning (MARL) settings, environments are highly stochastic due to the partial observability of each agent and the continuously changing policies of other agents. To address the above issues, we propose a unified framework, called DFAC, for integrating distributional RL with value function factorization methods. This framework generalizes expected value

  • Multivariate Soft Rank via Entropy-Regularized Optimal Transport: Sample Efficiency and Generative Modeling

    Updated: 2023-09-30 20:28:19
    Multivariate Soft Rank via Entropy-Regularized Optimal Transport: Sample Efficiency and Generative Modeling. Shoaib Bin Masud, Matthew Werenski, James M. Murphy, Shuchin Aeron; 24(160):1-65, 2023. Abstract: The framework of optimal transport has been leveraged to extend the notion of rank to the multivariate setting, as corresponding to an optimal transport map, while preserving desirable properties of the resulting goodness-of-fit (GoF) statistics. In particular, the rank energy (RE) and rank maximum mean discrepancy (RMMD) are distribution-free under the null and exhibit high power in statistical
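
    The entropic soft-rank idea can be sketched with a plain Sinkhorn loop: transport the data onto a reference sample and take the barycentric projection of the entropic plan as each point's multivariate rank. The reference measure, regularization strength eps, and iteration count below are illustrative assumptions, not the paper's exact construction.

      # Sinkhorn between data and a uniform reference sample on [0, 1]^2;
      # the barycentric projection of the entropic plan acts as a soft rank.
      import numpy as np

      rng = np.random.default_rng(0)
      n, eps = 200, 0.5                                   # illustrative choices
      X = rng.standard_normal((n, 2))                     # data
      U = rng.random((n, 2))                              # reference sample on the unit square
      C = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)  # squared-distance cost matrix
      K = np.exp(-C / eps)
      a = b = np.full(n, 1.0 / n)                         # uniform marginals
      u = np.ones(n)
      for _ in range(500):                                # Sinkhorn iterations
          v = b / (K.T @ u)
          u = a / (K @ v)
      P = u[:, None] * K * v[None, :]                     # entropic transport plan
      soft_rank = (P @ U) / P.sum(axis=1, keepdims=True)  # barycentric projection
      print(soft_rank[:3])                                # soft ranks lie in [0, 1]^2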

  • Functional L-Optimality Subsampling for Functional Generalized Linear Models with Massive Data

    Updated: 2023-09-30 20:28:19
    Functional L-Optimality Subsampling for Functional Generalized Linear Models with Massive Data. Hua Liu, Jinhong You, Jiguo Cao; 24(219):1-41, 2023. Abstract: Massive data bring big challenges of memory and computation for analysis. These challenges can be tackled by taking subsamples from the full data as a surrogate. For functional data, it is common to collect multiple measurements over their domains, which require even more memory and computation time when the sample size is large. The computation would be much more intensive when statistical inference is required through bootstrap samples

  • Adaptation Augmented Model-based Policy Optimization

    Updated: 2023-09-30 20:28:19
    Adaptation Augmented Model-based Policy Optimization. Jian Shen, Hang Lai, Minghuan Liu, Han Zhao, Yong Yu, Weinan Zhang; 24(218):1-35, 2023. Abstract: Compared to model-free reinforcement learning (RL), model-based RL is often more sample-efficient by leveraging a learned dynamics model to help decision making. However, the learned model is usually not perfectly accurate and the error will compound in multi-step predictions, which can lead to poor asymptotic performance. In this paper, we first derive an upper bound of the return discrepancy between the real dynamics and the learned model, which

  • Random Forests for Change Point Detection

    Updated: 2023-09-30 20:28:19
    Random Forests for Change Point Detection. Malte Londschien, Peter Bühlmann, Solt Kovács; 24(216):1-45, 2023. Abstract: We propose a novel multivariate nonparametric multiple change point detection method using classifiers. We construct a classifier log-likelihood ratio that uses class probability predictions to compare different change point configurations. We propose a computationally feasible search method that is particularly well suited for random forests, denoted by changeforest. However, the method can be paired with any classifier that yields class probability predictions, which we

  • Least Squares Model Averaging for Distributed Data

    Updated: 2023-09-30 20:28:19
    Least Squares Model Averaging for Distributed Data. Haili Zhang, Zhaobo Liu, Guohua Zou; 24(215):1-59, 2023. Abstract: The divide-and-conquer algorithm is a common strategy for big data. Model averaging has a natural divide-and-conquer feature, but its theory has not been developed in big data scenarios. The goal of this paper is to fill this gap. We propose two divide-and-conquer-type model averaging estimators for linear models with distributed data. Under some regularity conditions, we show that the weights from the Mallows model averaging criterion converge in $L_2$ to the theoretically optimal

  • Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent DAGs with Applications

    Updated: 2023-09-30 20:28:19
    Counting and sampling directed acyclic graphs from a Markov equivalence class are fundamental tasks in graphical causal analysis. In this paper we show that these tasks can be performed in polynomial time, solving a long-standing open problem in this area. Our algorithms are effective and easily implementable. As we show in experiments, these breakthroughs make thought-to-be-infeasible strategies in active learning of causal structures and causal effect identification with regard to a Markov equivalence class practically applicable.

  • LibMTL: A Python Library for Deep Multi-Task Learning

    Updated: 2023-09-30 20:28:19
    LibMTL: A Python Library for Deep Multi-Task Learning. Baijiong Lin, Yu Zhang; 24(209):1-7, 2023. Abstract: This paper presents LibMTL, an open-source Python library built on PyTorch, which provides a unified, comprehensive, reproducible, and extensible implementation framework for Multi-Task Learning (MTL). LibMTL considers different settings and approaches in MTL, and it supports a large number of state-of-the-art MTL methods, including 13 optimization strategies and 8 architectures. Moreover, the modular design in LibMTL makes it easy to use and well-extensible, so users can easily and

  • Minimax Risk Classifiers with 0-1 Loss

    Updated: 2023-09-30 20:28:19
    Minimax Risk Classifiers with 0-1 Loss. Santiago Mazuelas, Mauricio Romero, Peter Grunwald; 24(208):1-48, 2023. Abstract: Supervised classification techniques use training samples to learn a classification rule with small expected 0-1 loss (error probability). Conventional methods enable tractable learning and provide out-of-sample generalization by using surrogate losses instead of the 0-1 loss and considering specific families of rules (hypothesis classes). This paper presents minimax risk classifiers (MRCs) that minimize the worst-case 0-1 loss with respect to uncertainty sets of distributions that can

  • Augmented Sparsifiers for Generalized Hypergraph Cuts

    Updated: 2023-09-30 20:28:19
    Augmented Sparsifiers for Generalized Hypergraph Cuts. Nate Veldt, Austin R. Benson, Jon Kleinberg; 24(207):1-50, 2023. Abstract: Hypergraph generalizations of many graph cut problems and algorithms have recently been introduced to better model data and systems characterized by multiway relationships. Recent work in machine learning and theoretical computer science uses a generalized cut function for a hypergraph $\mathcal{H} = (V, \mathcal{E})$ that associates each hyperedge $e \in \mathcal{E}$ with a splitting function $\mathbf{w}_e$, which assigns a penalty to each way of separating the nodes of $e$. When each $\mathbf{w}_e$

  • L0Learn: A Scalable Package for Sparse Learning using L0 Regularization

    Updated: 2023-09-30 20:28:19
    We present L0Learn: an open-source package for sparse linear regression and classification using $\ell_0$ regularization. L0Learn implements scalable, approximate algorithms, based on coordinate descent and local combinatorial optimization. The package is built using C++ and has user-friendly R and Python interfaces. L0Learn can address problems with millions of features, achieving competitive run times and statistical performance with state-of-the-art sparse learning packages. L0Learn is available on both CRAN and GitHub.
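
    To make the $\ell_0$ objective concrete, here is a minimal sketch of coordinate descent with hard thresholding, the basic idea behind such solvers. This is not L0Learn's actual implementation, which adds local combinatorial search, screening rules, and a C++ backend.

      # L0-penalized least squares via cyclic coordinate descent with hard thresholding.
      import numpy as np

      def l0_coordinate_descent(X, y, lam, n_iter=100):
          n, d = X.shape
          beta = np.zeros(d)
          col_sq = (X ** 2).sum(axis=0)               # squared column norms
          r = y - X @ beta                            # residual
          for _ in range(n_iter):
              for j in range(d):
                  r_partial = r + X[:, j] * beta[j]   # remove coordinate j's contribution
                  rho = X[:, j] @ r_partial
                  best = rho / col_sq[j]
                  # keep coordinate j only if it lowers the loss by more than lam
                  new_bj = best if rho ** 2 / (2 * col_sq[j]) > lam else 0.0
                  r = r_partial - X[:, j] * new_bj
                  beta[j] = new_bj
          return beta

      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 50))
      beta_true = np.zeros(50); beta_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
      y = X @ beta_true + 0.1 * rng.standard_normal(200)
      print(np.nonzero(l0_coordinate_descent(X, y, lam=5.0))[0])   # recovers {3, 17, 41}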

  • A Non-parametric View of FedAvg and FedProx: Beyond Stationary Points

    Updated: 2023-09-30 20:28:19
    A Non-parametric View of FedAvg and FedProx: Beyond Stationary Points. Lili Su, Jiaming Xu, Pengkun Yang; 24(203):1-48, 2023. Abstract: Federated Learning (FL) is a promising decentralized learning framework and has great potential in privacy preservation and in lowering the computation load at the cloud. Recent work showed that FedAvg and FedProx, the two widely adopted FL algorithms, fail to reach the stationary points of the global optimization objective even for homogeneous linear regression problems. Further, there is concern that the common model learned might not generalize well locally at all in

  • Variational Inverting Network for Statistical Inverse Problems of Partial Differential Equations

    Updated: 2023-09-30 20:28:19
    Variational Inverting Network for Statistical Inverse Problems of Partial Differential Equations. Junxiong Jia, Yanni Wu, Peijun Li, Deyu Meng; 24(201):1-60, 2023. Abstract: To quantify uncertainties in inverse problems of partial differential equations (PDEs), we formulate them into statistical inference problems using Bayes' formula. Recently, well-justified infinite-dimensional Bayesian analysis methods have been developed to construct dimension-independent algorithms. However, there are three challenges for these infinite-dimensional Bayesian methods: prior measures usually act as regularizers

  • Model-based Causal Discovery for Zero-Inflated Count Data

    Updated: 2023-09-30 20:28:19
    Model-based Causal Discovery for Zero-Inflated Count Data. Junsouk Choi, Yang Ni; 24(200):1-32, 2023. Abstract: Zero-inflated count data arise in a wide range of scientific areas such as social science, biology, and genomics. Very few causal discovery approaches can adequately account for excessive zeros as well as various features of multivariate count data such as overdispersion. In this paper, we propose a new zero-inflated generalized hypergeometric directed acyclic graph (ZiG-DAG) model for inference of causal structure from purely observational zero-inflated count data. The proposed ZiG-DAGs

  • CodaLab Competitions: An Open Source Platform to Organize Scientific Challenges

    Updated: 2023-09-30 20:28:19
    CodaLab Competitions is an open source web platform designed to help data scientists and research teams to crowd-source the resolution of machine learning problems through the organization of competitions, also called challenges or contests. CodaLab Competitions provides useful features such as multiple phases, results and code submissions, multi-score leaderboards, and jobs running inside Docker containers. The platform is very flexible and can handle large scale experiments, by allowing organizers to upload large datasets and provide their own CPU or GPU compute workers.

  • Variational Gibbs Inference for Statistical Model Estimation from Incomplete Data

    Updated: 2023-09-30 20:28:19
    Variational Gibbs Inference for Statistical Model Estimation from Incomplete Data. Vaidotas Simkus, Benjamin Rhodes, Michael U. Gutmann; 24(196):1-72, 2023. Abstract: Statistical models are central to machine learning with broad applicability across a range of downstream tasks. The models are controlled by free parameters that are typically estimated from data by maximum-likelihood estimation or approximations thereof. However, when faced with real-world data sets, many of the models run into a critical issue: they are formulated in terms of fully-observed data, whereas in practice the data sets

  • Clustering and Structural Robustness in Causal Diagrams

    Updated: 2023-09-30 20:28:19
    Clustering and Structural Robustness in Causal Diagrams. Santtu Tikka, Jouni Helske, Juha Karvanen; 24(195):1-32, 2023. Abstract: Graphs are commonly used to represent and visualize causal relations. For a small number of variables, this approach provides a succinct and clear view of the scenario at hand. As the number of variables under study increases, the graphical approach may become impractical, and the clarity of the representation is lost. Clustering of variables is a natural way to reduce the size of the causal diagram, but it may erroneously change the essential properties of the causal

  • Insights into Ordinal Embedding Algorithms: A Systematic Evaluation

    Updated: 2023-09-30 20:28:19
    Insights into Ordinal Embedding Algorithms: A Systematic Evaluation. Leena Chennuru Vankadara, Michael Lohaus, Siavash Haghiri, Faiz Ul Wahab, Ulrike von Luxburg; 24(191):1-83, 2023. Abstract: The objective of ordinal embedding is to find a Euclidean representation of a set of abstract items, using only answers to triplet comparisons of the form "Is item $i$ closer to item $j$ or item $k$?" In recent years, numerous algorithms have been proposed to solve this problem. However, there does not exist a fair and thorough assessment of these embedding methods, and therefore several key questions remain

  • Clustering with Tangles: Algorithmic Framework and Theoretical Guarantees

    Updated: 2023-09-30 20:28:19
    Clustering with Tangles: Algorithmic Framework and Theoretical Guarantees. Solveig Klepper, Christian Elbracht, Diego Fioravanti, Jakob Kneip, Luca Rendsburg, Maximilian Teegen, Ulrike von Luxburg; 24(190):1-56, 2023. Abstract: Originally, tangles were invented as an abstract tool in mathematical graph theory to prove the famous graph minor theorem. In this paper, we showcase the practical potential of tangles in machine learning applications. Given a collection of cuts of any dataset, tangles aggregate these cuts to point in the direction of a dense structure. As a result, a cluster is

  • The Proximal ID Algorithm

    Updated: 2023-09-30 20:28:19
    The Proximal ID Algorithm. Ilya Shpitser, Zach Wood-Doughty, Eric J. Tchetgen Tchetgen; 24(188):1-46, 2023. Abstract: Unobserved confounding is a fundamental obstacle to establishing valid causal conclusions from observational data. Two complementary types of approaches have been developed to address this obstacle: obtaining identification using fortuitous external aids, such as instrumental variables or proxies, or by means of the ID algorithm, using Markov restrictions on the full data distribution encoded in graphical causal models. In this paper we aim to develop a synthesis of the former

  • Quantifying Network Similarity using Graph Cumulants

    Updated: 2023-09-30 20:28:19
    How might one test the hypothesis that networks were sampled from the same distribution? Here, we compare two statistical tests that use subgraph counts to address this question. The first uses the empirical subgraph densities themselves as estimates of those of the underlying distribution. The second test uses a new approach that converts these subgraph densities into estimates of the graph cumulants of the distribution (without any increase in computational complexity). We demonstrate --- via theory, simulation, and application to real data --- the superior statistical power of using graph cumulants. In summary, when analyzing data using subgraph/motif densities, we suggest using the corresponding graph cumulants instead.
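
    As a concrete reference for the first test, here is a small sketch that computes two empirical subgraph densities (edge and triangle) from an adjacency matrix; converting such densities into graph cumulants is the paper's contribution and is not reproduced here.

      # Empirical edge and triangle densities of a simple undirected graph.
      import numpy as np
      from math import comb

      def subgraph_densities(A):
          n = A.shape[0]
          edge_density = A.sum() / (n * (n - 1))                 # A symmetric, 0/1, zero diagonal
          triangle_density = np.trace(A @ A @ A) / 6 / comb(n, 3)
          return edge_density, triangle_density

      rng = np.random.default_rng(0)
      n, p = 60, 0.1
      A = (rng.random((n, n)) < p).astype(int)
      A = np.triu(A, 1); A = A + A.T                             # symmetric Erdos-Renyi sample
      print(subgraph_densities(A))                               # roughly (p, p**3) for G(n, p)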

  • Learning an Explicit Hyper-parameter Prediction Function Conditioned on Tasks

    Updated: 2023-09-30 20:28:19
    Learning an Explicit Hyper-parameter Prediction Function Conditioned on Tasks. Jun Shu, Deyu Meng, Zongben Xu; 24(186):1-74, 2023. Abstract: Meta learning has attracted much attention recently in the machine learning community. Contrary to conventional machine learning, which aims to learn inherent prediction rules to predict labels for new query data, meta learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks by leveraging the meta-learned learning methodology. In this study, we achieve such learning methodology by learning an

  • On the Theoretical Equivalence of Several Trade-Off Curves Assessing Statistical Proximity

    Updated: 2023-09-30 20:28:19
    On the Theoretical Equivalence of Several Trade-Off Curves Assessing Statistical Proximity. Rodrigue Siry, Ryan Webster, Loic Simon, Julien Rabin; 24(185):1-34, 2023. Abstract: The recent advent of powerful generative models has triggered the renewed development of quantitative measures to assess the proximity of two probability distributions. While the scalar Fréchet Inception Distance remains popular, several methods have explored computing entire curves, which reveal the trade-off between the fidelity and variability of the first distribution with respect to the second one. Several such

  • Factor Graph Neural Networks

    Updated: 2023-09-30 20:28:19
    Factor Graph Neural Networks. Zhen Zhang, Mohammed Haroon Dupty, Fan Wu, Javen Qinfeng Shi, Wee Sun Lee; 24(181):1-54, 2023. Abstract: In recent years, we have witnessed a surge of Graph Neural Networks (GNNs), most of which can learn powerful representations in an end-to-end fashion with great success in many real-world applications. They have resemblance to Probabilistic Graphical Models (PGMs) but break free from some limitations of PGMs. By aiming to provide expressive methods for representation learning instead of computing marginals or most likely configurations, GNNs provide flexibility in the

  • Comprehensive Algorithm Portfolio Evaluation using Item Response Theory

    Updated: 2023-09-30 20:28:19
    Comprehensive Algorithm Portfolio Evaluation using Item Response Theory. Sevvandi Kandanaarachchi, Kate Smith-Miles; 24(177):1-52, 2023. Abstract: Item Response Theory (IRT) has been proposed within the field of Educational Psychometrics to assess student ability as well as test question difficulty and discrimination power. More recently, IRT has been applied to evaluate machine learning algorithm performance on a single classification dataset, where the student is now an algorithm, and the test question is an observation to be classified by the algorithm. In this paper we present a modified

  • Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks

    Updated: 2023-09-30 20:28:19
    Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks. Khurram Javed, Haseeb Shah, Richard S. Sutton, Martha White; 24(256):1-34, 2023. Abstract: Constructing states from sequences of observations is an important component of reinforcement learning agents. One solution for state construction is to use recurrent neural networks. Back-propagation through time (BPTT) and real-time recurrent learning (RTRL) are two popular gradient-based methods for recurrent learning. BPTT requires complete trajectories of observations before it can compute the gradients and is unsuitable for online

  • Torchhd: An Open Source Python Library to Support Research on Hyperdimensional Computing and Vector Symbolic Architectures

    Updated: 2023-09-30 20:28:19
    Torchhd: An Open Source Python Library to Support Research on Hyperdimensional Computing and Vector Symbolic Architectures. Mike Heddes, Igor Nunes, Pere Vergés, Denis Kleyko, Danny Abraham, Tony Givargis, Alexandru Nicolau, Alexander Veidenbaum; 24(255):1-10, 2023. Abstract: Hyperdimensional computing (HD), also known as vector symbolic architectures (VSA), is a framework for computing with distributed representations by exploiting properties of random high-dimensional vector spaces. The commitment of the scientific community to aggregate and disseminate research in this particularly
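
    The basic VSA operations that such libraries package (random hypervectors, binding, bundling, similarity) can be illustrated with plain bipolar NumPy vectors. This sketch shows the concepts only; it is not Torchhd's API.

      # Binding = elementwise product, bundling = majority sum, similarity = normalized dot product.
      import numpy as np

      rng = np.random.default_rng(0)
      d = 10_000
      def random_hv():            # random bipolar hypervector
          return rng.choice([-1, 1], size=d)

      def bind(a, b):             # associates two vectors; result is dissimilar to both inputs
          return a * b

      def bundle(*vs):            # superposes vectors; result is similar to each input
          return np.sign(np.sum(vs, axis=0))

      def sim(a, b):              # normalized dot product
          return a @ b / d

      key, value, other = random_hv(), random_hv(), random_hv()
      record = bundle(bind(key, value), other)
      print(round(sim(bind(record, key), value), 2))   # ~0.5: the value can be recovered
      print(round(sim(record, random_hv()), 2))        # ~0.0: unrelated vector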

  • Adaptive False Discovery Rate Control with Privacy Guarantee

    Updated: 2023-09-30 20:28:19
    Adaptive False Discovery Rate Control with Privacy Guarantee. Xintao Xia, Zhanrui Cai; 24(252):1-35, 2023. Abstract: Differentially private multiple testing procedures can protect the information of individuals used in hypothesis tests while guaranteeing a small fraction of false discoveries. In this paper, we propose a differentially private adaptive FDR control method that can control the classic FDR metric exactly at a user-specified level $\alpha$ with a privacy guarantee, which is a non-trivial improvement compared to the differentially private Benjamini-Hochberg method proposed in Dwork et al.
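
    For reference, here is a minimal sketch of the classic, non-private Benjamini-Hochberg step-up procedure that these differentially private methods build on; the paper's private adaptive procedure itself is not shown.

      # Benjamini-Hochberg step-up procedure at FDR level alpha.
      import numpy as np

      def benjamini_hochberg(pvals, alpha=0.1):
          m = len(pvals)
          order = np.argsort(pvals)
          sorted_p = np.asarray(pvals)[order]
          below = sorted_p <= alpha * (np.arange(1, m + 1) / m)   # step-up thresholds
          rejected = np.zeros(m, dtype=bool)
          if below.any():
              k = np.max(np.nonzero(below)[0])                    # largest index passing its threshold
              rejected[order[:k + 1]] = True                      # reject the k+1 smallest p-values
          return rejected

      pvals = [0.001, 0.009, 0.04, 0.2, 0.5, 0.8]
      print(benjamini_hochberg(pvals, alpha=0.1))                 # rejects the first three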

  • Convex Reinforcement Learning in Finite Trials

    Updated: 2023-09-30 20:28:19
    Convex Reinforcement Learning in Finite Trials. Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, Marcello Restelli; 24(250):1-42, 2023. Abstract: Convex Reinforcement Learning (RL) is a recently introduced framework that generalizes the standard RL objective to any convex or concave function of the state distribution induced by the agent's policy. This framework subsumes several applications of practical interest, such as pure exploration, imitation learning, and risk-averse RL, among others. However, the previous convex RL literature implicitly evaluates the agent's performance over

  • Graph Attention Retrospective

    Updated: 2023-09-30 20:28:19
    Graph Attention Retrospective. Kimon Fountoulakis, Amit Levi, Shenghao Yang, Aseem Baranwal, Aukosh Jagannath; 24(246):1-52, 2023. Abstract: Graph-based learning is a rapidly growing sub-field of machine learning with applications in social networks, citation networks, and bioinformatics. One of the most popular models is graph attention networks. They were introduced to allow a node to aggregate information from features of neighbor nodes in a non-uniform way, in contrast to simple graph convolution, which does not distinguish the neighbors of a node. In this paper, we theoretically study the
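
    For context, the non-uniform aggregation of a standard graph attention (GAT) layer can be written as follows, where $h_i$ is the feature vector of node $i$, $W$ and $a$ are learned parameters, $\|$ denotes concatenation, and $\mathcal{N}(i)$ is the neighborhood of $i$:

      \alpha_{ij} = \frac{\exp\big(\mathrm{LeakyReLU}\big(a^\top [W h_i \,\|\, W h_j]\big)\big)}
                         {\sum_{k \in \mathcal{N}(i)} \exp\big(\mathrm{LeakyReLU}\big(a^\top [W h_i \,\|\, W h_k]\big)\big)},
      \qquad
      h_i' = \sigma\Big(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, W h_j\Big).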

  • Confidence Intervals and Hypothesis Testing for High-dimensional Quantile Regression: Convolution Smoothing and Debiasing

    Updated: 2023-09-30 20:28:19
    Confidence Intervals and Hypothesis Testing for High-dimensional Quantile Regression: Convolution Smoothing and Debiasing. Yibo Yan, Xiaozhou Wang, Riquan Zhang; 24(245):1-49, 2023. Abstract: $\ell_1$-penalized quantile regression ($\ell_1$-QR) is a useful tool for modeling the relationship between input and output variables when detecting heterogeneous effects in the high-dimensional setting. Hypothesis tests can then be formulated based on the debiased $\ell_1$-QR estimator that reduces the bias induced by the Lasso penalty. However, the non-smoothness of the quantile loss brings great challenges to the

  • Selection by Prediction with Conformal p-values

    Updated: 2023-09-30 20:28:19
    Selection by Prediction with Conformal p-values. Ying Jin, Emmanuel J. Candes; 24(244):1-41, 2023. Abstract: Decision making or scientific discovery pipelines such as job hiring and drug discovery often involve multiple stages: before any resource-intensive step, there is often an initial screening that uses predictions from a machine learning model to shortlist a few candidates from a large pool. We study screening procedures that aim to select candidates whose unobserved outcomes exceed user-specified values. We develop a method that wraps around any prediction model to produce a subset of

  • Sparse Graph Learning from Spatiotemporal Time Series

    Updated: 2023-09-30 20:28:19
    Sparse Graph Learning from Spatiotemporal Time Series. Andrea Cini, Daniele Zambon, Cesare Alippi; 24(242):1-36, 2023. Abstract: Outstanding achievements of graph neural networks for spatiotemporal time series analysis show that relational constraints introduce an effective inductive bias into neural forecasting architectures. Often, however, the relational information characterizing the underlying data-generating process is unavailable, and the practitioner is left with the problem of inferring from data which relational graph to use in the subsequent processing stages.

  • Improved Powered Stochastic Optimization Algorithms for Large-Scale Machine Learning

    Updated: 2023-09-30 20:28:19
    Improved Powered Stochastic Optimization Algorithms for Large-Scale Machine Learning. Zhuang Yang; 24(241):1-29, 2023. Abstract: Stochastic optimization, especially stochastic gradient descent (SGD), is now the workhorse for the vast majority of problems in machine learning. Various strategies (e.g., control variates, adaptive learning rates, momentum techniques) have been developed to improve canonical SGD, which suffers from a slow convergence rate and poor generalization in practice. Most of these strategies improve SGD in ways that can be attributed to controlling the updating direction (e.g., the gradient descent

  • Efficient Computation of Rankings from Pairwise Comparisons

    Updated: 2023-09-30 20:28:19
    We study the ranking of individuals, teams, or objects, based on pairwise comparisons between them, using the Bradley-Terry model. Estimates of rankings within this model are commonly made using a simple iterative algorithm first introduced by Zermelo almost a century ago. Here we describe an alternative and similarly simple iteration that provably returns identical results but does so much faster—over a hundred times faster in some cases. We demonstrate this algorithm with applications to a range of example data sets and derive a number of results regarding its convergence.
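
    For reference, the century-old iteration mentioned above is the Zermelo (minorization-maximization) update; a minimal sketch follows. The paper's faster alternative iteration is not shown here.

      # Classic Zermelo iteration for Bradley-Terry strengths.
      # w[i, j] = number of times i beat j; pi[i] = estimated strength of player i.
      import numpy as np

      def bradley_terry_zermelo(w, n_iter=1000, tol=1e-10):
          n = w.shape[0]
          m = w + w.T                    # total comparisons between each pair
          wins = w.sum(axis=1)           # total wins of each player
          pi = np.ones(n)
          for _ in range(n_iter):
              denom = (m / (pi[:, None] + pi[None, :])).sum(axis=1)
              new_pi = wins / denom
              new_pi /= new_pi.sum()     # fix the overall scale
              if np.max(np.abs(new_pi - pi)) < tol:
                  return new_pi
              pi = new_pi
          return pi

      w = np.array([[0, 7, 9],           # toy win counts among three players
                    [3, 0, 6],
                    [1, 4, 0]])
      print(bradley_terry_zermelo(w))    # strengths; highest value = best-ranked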

  • Scalable Computation of Causal Bounds

    Updated: 2023-09-30 20:28:19
    Scalable Computation of Causal Bounds. Madhumitha Shridharan, Garud Iyengar; 24(237):1-35, 2023. Abstract: We consider the problem of computing bounds for causal queries on causal graphs with unobserved confounders and discrete valued observed variables, where identifiability does not hold. Existing non-parametric approaches for computing such bounds use linear programming (LP) formulations that quickly become intractable for existing solvers because the size of the LP grows exponentially in the number of edges in the causal graph. We show that this LP can be significantly pruned, allowing us to

  • MultiZoo and MultiBench: A Standardized Toolkit for Multimodal Deep Learning

    Updated: 2023-09-30 20:28:19
    MultiZoo and MultiBench: A Standardized Toolkit for Multimodal Deep Learning. Paul Pu Liang, Yiwei Lyu, Xiang Fan, Arav Agarwal, Yun Cheng, Louis-Philippe Morency, Ruslan Salakhutdinov; 24(234):1-7, 2023. Abstract: Learning multimodal representations involves integrating information from multiple heterogeneous sources of data. In order to accelerate progress towards understudied modalities and tasks while ensuring real-world robustness, we release MultiZoo, a public toolkit consisting of standardized implementations of 20 core multimodal algorithms, and MultiBench, a large-scale benchmark

  • Strategic Knowledge Transfer

    Updated: 2023-09-30 20:28:19
    Strategic Knowledge Transfer. Max Olan Smith, Thomas Anthony, Michael P. Wellman; 24(233):1-96, 2023. Abstract: In the course of playing or solving a game, it is common to face a series of changing other-agent strategies. These strategies often share elements: the set of possible policies to play has overlap, and the policies are sampled at the beginning of play by possibly differing distributions. As it faces the series of strategies, therefore, an agent has the opportunity to transfer its learned play against the previously encountered other-agent policies. We tackle two problems: (1) how can

  • Statistical Comparisons of Classifiers by Generalized Stochastic Dominance

    Updated: 2023-09-30 20:28:19
    Statistical Comparisons of Classifiers by Generalized Stochastic Dominance. Christoph Jansen, Malte Nalenz, Georg Schollmeyer, Thomas Augustin; 24(231):1-37, 2023. Abstract: Although it is a crucial question for the development of machine learning algorithms, there is still no consensus on how to compare classifiers over multiple data sets with respect to several criteria. Every comparison framework is confronted with at least three fundamental challenges: the multiplicity of quality criteria, the multiplicity of data sets, and the randomness of the selection of data sets. In this paper, we add a

  • Sample Complexity for Distributionally Robust Learning under chi-square divergence

    Updated: 2023-09-30 20:28:19
    This paper investigates the sample complexity of learning a distributionally robust predictor under a particular distributional shift based on $\chi^2$-divergence, which is well known for its computational feasibility and statistical properties. We demonstrate that any hypothesis class $\mathcal{H}$ with finite VC dimension is distributionally robustly learnable. Moreover, we show that when the perturbation size is smaller than a constant, finite VC dimension is also necessary for distributionally robust learning by deriving a lower bound of sample complexity in terms of VC dimension.
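
    For reference, the $\chi^2$-divergence and the distributionally robust objective it induces can be written as below; the ambiguity radius $\rho$ is notation assumed here, not taken from the abstract:

      D_{\chi^2}(Q \,\|\, P) = \int \Big( \frac{dQ}{dP} - 1 \Big)^{2} dP,
      \qquad
      \min_{h \in \mathcal{H}} \; \sup_{Q:\, D_{\chi^2}(Q \,\|\, P) \le \rho} \; \mathbb{E}_{Q}\big[\ell(h, Z)\big].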

  • Interpretable and Fair Boolean Rule Sets via Column Generation

    Updated: 2023-09-30 20:28:19
    Interpretable and Fair Boolean Rule Sets via Column Generation. Connor Lawless, Sanjeeb Dash, Oktay Gunluk, Dennis Wei; 24(229):1-50, 2023. Abstract: This paper considers the learning of Boolean rules in disjunctive normal form (DNF, OR-of-ANDs, equivalent to decision rule sets) as an interpretable model for classification. An integer program is formulated to optimally trade classification accuracy for rule simplicity. We also consider the fairness setting and extend the formulation to include explicit constraints on two different measures of classification parity: equality of opportunity and
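
    To picture the model class, here is a toy evaluator for a DNF (OR-of-ANDs) rule set: an example is classified as positive when all features of at least one clause equal 1. The learning step (the integer program and column generation) is not reproduced here.

      # Evaluate a DNF rule set on binary feature vectors.
      import numpy as np

      def predict_dnf(X, clauses):
          """X: (n, d) binary matrix; clauses: list of feature-index lists."""
          preds = np.zeros(X.shape[0], dtype=bool)
          for clause in clauses:                   # OR over clauses
              preds |= X[:, clause].all(axis=1)    # AND within a clause
          return preds

      X = np.array([[1, 0, 1, 1],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1]])
      clauses = [[0, 3], [1, 2]]                   # (x0 AND x3) OR (x1 AND x2)
      print(predict_dnf(X, clauses))               # [ True  True False]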

  • Autoregressive Networks

    Updated: 2023-09-30 20:28:19
    Autoregressive Networks. Binyan Jiang, Jialiang Li, Qiwei Yao; 24(227):1-69, 2023. Abstract: We propose a first-order autoregressive (i.e., AR(1)) model for dynamic network processes in which edges change over time while nodes remain unchanged. The model depicts the dynamic changes explicitly. It also facilitates simple and efficient statistical inference methods, including a permutation test for diagnostic checking of the fitted network models. The proposed model can be applied to network processes with various underlying structures but with independent edges. As an illustration, an AR(1)
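
    A simulation sketch helps picture the model class: each edge follows its own two-state Markov chain, turning on with probability alpha when absent and off with probability beta when present. This is one natural parameterization chosen for illustration; the paper's exact specification may differ.

      # Simulate an AR(1)-type dynamic network with independent edges.
      import numpy as np

      def simulate_ar1_network(n_nodes, T, alpha=0.05, beta=0.2, seed=0):
          rng = np.random.default_rng(seed)
          iu = np.triu_indices(n_nodes, k=1)                        # undirected edge slots
          state = rng.random(len(iu[0])) < alpha / (alpha + beta)   # start near stationarity
          snapshots = []
          for _ in range(T):
              flip_on = rng.random(len(state)) < alpha
              flip_off = rng.random(len(state)) < beta
              state = np.where(state, ~flip_off, flip_on)           # Markov transition per edge
              A = np.zeros((n_nodes, n_nodes), dtype=int)
              A[iu] = state
              snapshots.append(A + A.T)
          return snapshots

      nets = simulate_ar1_network(n_nodes=20, T=50)
      print([A.sum() // 2 for A in nets[:5]])                       # edge counts over time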

  • Merlion: End-to-End Machine Learning for Time Series

    Updated: 2023-09-30 20:28:19
    Merlion: End-to-End Machine Learning for Time Series. Aadyot Bhatnagar, Paul Kassianik, Chenghao Liu, Tian Lan, Wenzhuo Yang, Rowan Cassius, Doyen Sahoo, Devansh Arpit, Sri Subramanian, Gerald Woo, Amrita Saha, Arun Kumar Jagota, Gokulakrishnan Gopalakrishnan, Manpreet Singh, K C Krithika, Sukumar Maddineni, Daeki Cho, Bo Zong, Yingbo Zhou, Caiming Xiong, Silvio Savarese, Steven Hoi, Huan Wang; 24(226):1-6, 2023. Abstract: We introduce Merlion, an open-source machine learning library for time series. It features a unified interface for many commonly used models and datasets for

  • Conditional Distribution Function Estimation Using Neural Networks for Censored and Uncensored Data

    Updated: 2023-09-30 20:28:19
    Conditional Distribution Function Estimation Using Neural Networks for Censored and Uncensored Data. Bingqing Hu, Bin Nan; 24(223):1-26, 2023. Abstract: Most work in neural networks focuses on estimating the conditional mean of a continuous response variable given a set of covariates. In this article, we consider estimating the conditional distribution function using neural networks for both censored and uncensored data. The algorithm is built upon the data structure particularly constructed for the Cox regression with time-dependent covariates. Without imposing any model assumptions, we consider a

  • ✚ Visualization Tools and Learning Resources, September 2023 Roundup

    Updated: 2023-09-28 18:30:22
    Members only. September 28, 2023. Topic: The Process (roundup). Welcome to The Process, where we look closer at how the charts get made. This is issue 258. I'm Nathan Yau. Every month I collect tools and resources to help you make better charts. Here's the good stuff for September. The Process is a weekly newsletter on how visualization tools, rules, and guidelines work in practice, published every Thursday; access requires a FlowingData membership.

  • How to Build Pie Charts with JavaScript

    Updated: 2023-09-28 08:09:21
    The pie chart, a widely used chart type yet also a topic of debate here and there, has cemented its place in the realm of data visualization. When used appropriately, it provides an intuitive insight into the composition of data, with each slice of the pie representing a distinct component. In this tutorial, I’ll guide […] The post How to Build Pie Charts with JavaScript appeared first on AnyChart News.

  • Two maps with the same scale

    Updated: 2023-09-26 07:23:03
    September 26, 2023. Topic: Maps (Josh Horowitz, scale). When you compare two areas on a single map, it can be a challenge to judge their actual sizes because of the trade-offs of projecting a three-dimensional space onto a two-dimensional one. Josh Horowitz made a tool that automatically rescales side-by-side maps as you pan and zoom, so that you get a more accurate comparison.

  • Twitter slows competitor links

    Updated: 2023-09-22 15:48:19
    September 22, 2023. Topic: Infographics (The Markup, Twitter). When you click a link on Twitter, you go through a Twitter shortlink first and then to the place you want to go. When you click on a link that points to one of Twitter's competitors, by complete coincidence I am sure, there's a delay. For The Markup, Jon Keegan, Dan Phiffer, and Joel Eastwood ran the tests. You can also try it with your own URLs. I'm into the animated opening graphic.

  • ✚ Calming Data

    Updated: 2023-09-21 18:30:17
    Members only. September 21, 2023. Topic: The Process (calm, feeling, insight). Welcome to The Process, where we look closer at how the charts get made. This is issue 257. I'm Nathan Yau. We make charts for a lot of reasons: getting status updates, finding trends, evaluating accuracy, testing hypotheses, decorating, and proving points. For me, chart-making provides a perspective that's beyond my own, which can be calming. The Process is a weekly newsletter on how visualization tools, rules, and guidelines work in practice, published every Thursday; access requires a FlowingData membership.

  • Elevated Data Control & Customization in AnyChart’s Latest Qlik Sense Extensions

    Updated: 2023-09-12 00:39:13
    September 12, 2023, by AnyChart Team. Prepare for an advanced level of data control and customization as we unveil the latest update for our Qlik Sense extensions. The September 2023 release is dedicated to enhancing your visual analytics experience, with a particular emphasis on the

  • Stock Chart Creation in JavaScript: Step-by-Step Guide

    Updated: 2023-09-05 00:34:35
    Chances are, you’ve come across various stock charts, whether you’re a seasoned trader or not. If crafting your data graphics piques your interest, you’re in the right place. Welcome to this user-friendly tutorial on building an interactive stock chart using JavaScript! The JS stock chart we’ll create by the end of this guide will visually […] The post Stock Chart Creation in JavaScript: Step-by-Step Guide appeared first on AnyChart News.

Current Feed Items | Previous Months Items

Aug 2023 | Jul 2023 | Jun 2023 | May 2023 | Apr 2023 | Mar 2023